Oracle Performance Tuning and Optimization
(Publisher: Macmillan Computer Publishing)
Author(s): Edward Whalen
ISBN: 067230886x
Publication Date: 04/01/96
Oracle Tuning
Carefully analyze the following areas to determine whether the associated parameters need adjustment:
- DB_BLOCK_BUFFERS. Block size is not nearly as critical in a data warehouse as it is in OLTP and batch systems. However, having an insufficient number of block buffers results in higher-than-normal I/O rates and possibly an I/O bottleneck. The statistics for the buffer cache are kept in the dynamic performance table V$SYSSTAT. The cache-miss ratio is the number of PHYSICAL READS divided by the sum of DB BLOCK GETS and CONSISTENT GETS; this ratio should be minimized.
- Library cache. Check the V$LIBRARYCACHE table, which contains statistics about how well you are using the library cache. The important columns to view in this table are PINS and RELOADS. A low number of RELOADS relative to the number of executions (PINS) indicates a high cache-hit rate. You should be able to reduce library cache misses by increasing the amount of memory available for the library cache; do so by increasing the Oracle tunable parameter SHARED_POOL_SIZE. Try increasing the size of the shared pool by 10 percent and monitor it again. If this is not sufficient, increase it by another 10 percent and continue in this manner until you are satisfied with the result.
- Multiblock reads. Because many of the queries will involve table scans, make sure that you take advantage of multiblock reads. The number of blocks read in a multiblock read is specified by the Oracle initialization parameter DB_FILE_MULTIBLOCK_READ_COUNT. This value, multiplied by the DB_BLOCK_SIZE parameter, determines the size of the I/Os. A good value for the I/O size is 64K.
- Cursor space for time. If you have plenty of memory, you can speed access to the shared SQL areas by setting the Oracle initialization parameter CURSOR_SPACE_FOR_TIME to TRUE.
- Data dictionary cache. To determine the efficiency of the data dictionary cache, check the dynamic performance table V$ROWCACHE. The important columns to view in this table are GETS and GETMISSES. The ratio of GETMISSES to GETS should be low.
- Rollback segments. Depending on the number of updates, you may want to adjust the number of rollback segments to reduce contention. Unlike an OLTP system, in a data warehousing system you are better off with fewer rollback segments of a larger size. The size of the rollback segments depends on the size of the transactions; because of the long-running transactions usually associated with a data warehouse, your rollback segments should be fairly large. Rollback contention occurs when too many transactions try to use the same rollback segment at the same time and some of them have to wait. You can tell whether you have contention on rollback segments by looking at the dynamic performance table V$WAITSTAT. Check for an excessive number of waits on UNDO HEADER, UNDO BLOCK, SYSTEM UNDO HEADER, and SYSTEM UNDO BLOCK, and compare these values to the total number of requests for data. If the ratio is high, you need more rollback segments.
- Latch contention. You can determine whether latch contention is a problem by examining the dynamic performance table V$LATCH. Look for the ratio of MISSES to GETS, the number of SLEEPS, and the ratio of IMMEDIATE_MISSES to IMMEDIATE_GETS. If the miss ratio is high, reduce the size of LOG_SMALL_ENTRY_MAX_SIZE to minimize the time any user process holds the latch; alternatively, increase the value of LOG_SIMULTANEOUS_COPIES to reduce contention by adding more redo copy latches. If neither of these approaches helps, set the initialization parameter LOG_ENTRY_PREBUILD_THRESHOLD. Any redo entry of a smaller size than this parameter must be prebuilt, which reduces the time the latch is held.
- Checkpoints. You may have to tune checkpoints under certain circumstances. Although this is usually not necessary, if you see severely degraded performance during checkpoints, you can reduce this effect by enabling the CKPT process. Do so by setting the Oracle initialization parameter CHECKPOINT_PROCESS to TRUE.
- Archiving. By adjusting the initialization parameters LOG_ARCHIVE_BUFFERS and LOG_ARCHIVE_BUFFER_SIZE, you can either slow down or speed up the performance of archiving. By speeding up archiving, the effect on the system is of a shorter duration but is more noticeable. Slowing down archiving lengthens the duration but reduces the effect.
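
The buffer cache-miss ratio described above can be computed directly from V$SYSSTAT. The following query is an illustrative sketch; the statistic names are those used by Oracle's V$SYSSTAT view:

```sql
-- Cache-miss ratio: physical reads / (db block gets + consistent gets)
SELECT ROUND(100 * phys.value / GREATEST(gets.value + con.value, 1), 2)
         AS miss_ratio_pct
  FROM v$sysstat phys,
       v$sysstat gets,
       v$sysstat con
 WHERE phys.name = 'physical reads'
   AND gets.name = 'db block gets'
   AND con.name  = 'consistent gets';
```

A persistently high value suggests increasing DB_BLOCK_BUFFERS; a value of a few percent or less generally indicates a healthy buffer cache.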
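Library cache efficiency (the PINS and RELOADS columns mentioned above) can be summarized with a query along these lines:

```sql
-- RELOADS relative to PINS; a ratio near zero indicates a high hit rate
SELECT SUM(pins)    AS pins,
       SUM(reloads) AS reloads,
       ROUND(100 * SUM(reloads) / GREATEST(SUM(pins), 1), 2) AS reload_pct
  FROM v$librarycache;
```

If the reload percentage climbs above a few percent, increase SHARED_POOL_SIZE in the 10-percent steps described above and re-check.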
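For the multiblock-read sizing described above, the 64K I/O target is reached when DB_BLOCK_SIZE times DB_FILE_MULTIBLOCK_READ_COUNT equals 65,536. The init.ora fragment below is a sketch with illustrative values; note that DB_BLOCK_SIZE is fixed when the database is created and cannot simply be changed here:

```
# 8K block size x 8 blocks per multiblock read = 64K per I/O
db_block_size                 = 8192
db_file_multiblock_read_count = 8
```

With a 2K block size, the same 64K I/O would instead call for a DB_FILE_MULTIBLOCK_READ_COUNT of 32.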
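The data dictionary cache ratio of GETMISSES to GETS can be checked with a sketch like this:

```sql
-- Dictionary cache misses across all row cache entries
SELECT SUM(gets)      AS gets,
       SUM(getmisses) AS getmisses,
       ROUND(100 * SUM(getmisses) / GREATEST(SUM(gets), 1), 2) AS miss_pct
  FROM v$rowcache;
```

Because the dictionary cache is carved out of the shared pool, a high miss percentage is another indication that SHARED_POOL_SIZE should be increased.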
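Rollback segment contention (the V$WAITSTAT wait classes mentioned above) can be examined with a query such as:

```sql
-- Waits on undo classes; compare these counts to the total number
-- of data requests (db block gets + consistent gets in V$SYSSTAT)
SELECT class, count
  FROM v$waitstat
 WHERE class IN ('undo header', 'undo block',
                 'system undo header', 'system undo block');
```

If these counts are more than a small fraction of total data requests, add rollback segments.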
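Latch statistics (the MISSES, GETS, SLEEPS, and immediate-mode columns of V$LATCH described above) can be surveyed with a sketch like the following:

```sql
-- Willing-to-wait and immediate miss ratios per latch;
-- the redo allocation and redo copy latches are of most interest here
SELECT name,
       gets, misses, sleeps,
       ROUND(100 * misses / GREATEST(gets, 1), 2) AS miss_pct,
       immediate_gets, immediate_misses,
       ROUND(100 * immediate_misses
                 / GREATEST(immediate_gets, 1), 2) AS imm_miss_pct
  FROM v$latch
 ORDER BY misses DESC;
```

High miss ratios on the redo latches are the cue to adjust LOG_SMALL_ENTRY_MAX_SIZE, LOG_SIMULTANEOUS_COPIES, or LOG_ENTRY_PREBUILD_THRESHOLD as described above.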
These are some of the areas you should pay particular attention to when tuning a system for decision-support queries in a data warehousing system. The areas that probably require the most attention are I/O and memory, because the two are so closely related: by optimizing the use of memory, you may be able to reduce I/Os, which are probably already running near the limits of the hardware.
Server OS Tuning
You may have to tune the server OS to provide for a large number of processes (if you are using the Parallel Query option) and optimal I/O performance. Some of the things you may have to tune in the server OS are listed here; remember that some OSes may not require any tuning in these areas:
- Memory. Tune the system to reduce unnecessary memory usage so that Oracle can use as much of the system's memory as possible for the SGA and server processes. You may also need significant amounts of memory for sorts.
- Memory enhancements. Take advantage of 4M pages and Intimate Shared Memory (ISM), if they are available. Both features can improve Oracle performance in a data warehouse environment.
- I/O. If necessary, tune I/O to allow for optimal performance and use of asynchronous I/O (AIO).
- Scheduler. If possible, turn off preemptive scheduling and load balancing. In a data warehousing system, allowing a process to run to completion (that is, so that it is not preempted) is beneficial.
- Cache affinity. You may see some benefits from cache affinity in a data warehousing system because the processes tend to run somewhat longer.
The server operating system is mainly a host on which Oracle does its job. Any work done by the operating system is essentially overhead for Oracle. By optimizing code paths and reducing OS overhead, you can enhance Oracle performance.